Chained DLS-ICBP Neural Networks with Multiple Steps Time Series Prediction

Author

  • Qun Chen
Abstract

Based on the circular back-propagation (CBP) network, the improved circular back-propagation (ICBP) neural network was previously put forward and exhibits a more general architecture than CBP. It has the favorable property of outperforming CBP in generalization and adaptation even though its number of adaptable weights is generally smaller. Forecasting experiments on chaotic time series, multiple-input multiple-output (MIMO) systems, and data sets of daily water consumption have shown that ICBP has better prediction and approximation capabilities than CBP. In the above prediction process, however, ICBP neglects inherent structural changes and time correlations in the time series themselves. In other words, it does not take into account the influence on forecasting performance of the different distances between the observations and the predicting point. The principle of discounted least squares (DLS) formulates this influence exactly. In this paper, the DLS principle is borrowed to construct the learning algorithm of DLS-ICBP. On this basis we construct chained DLS-ICBP neural networks by combining a new kind of chain structure with DLS-ICBP, and we investigate multiple-step time series prediction. Through experiments on benchmark data sets and on water consumption data we show that DLS-ICBP has better single-step and multiple-step predictive capabilities than ICBP.

Keywords—Neural networks, chain structure, discounted least squares, improved circular back-propagation, multiple-step time series prediction.

I. Introduction

As a generalization of the multilayer perceptron (MLP) and a more general network model analogous to BP, the circular back-propagation (CBP) network adds to its input layer an extra node whose input is the sum of the squared input components. It possesses favorable generalization and adaptability. Within its framework, vector quantization (VQ) and radial basis function (RBF) networks can be constructed, which shows great flexibility. Retaining the original structure of CBP, we obtain a more general network model, ICBP, through a special construction of the extra node in the CBP input layer and special assignments of the weights between that node and the hidden layer. First, in addition to CBP's constructive equivalence to VQ and RBF, ICBP can simulate the well-known Bayesian classifier in a constructive way. Second, although ICBP has fewer adaptable weights than CBP, it generalizes and adapts better. Third, it still adopts the BP learning algorithm, with learning complexity equal to that of CBP. Test results show that ICBP possesses better predictive and approximation capabilities than CBP does. However, during the prediction process ICBP neglects inherent structural changes and time correlations in the time series itself. Intuitively, the predicting point is more strongly correlated with observations close to it and more weakly correlated with those far away. Therefore, during training, the samples in the time window exert different influences on the network weights: the nearer an observation is to the predicting point, the greater its influence. The idea of discounted least squares formulates exactly this influence. To make ICBP embody this characteristic, we put forward DLS-ICBP, based on DLS and oriented toward time series prediction, by introducing DLS into its cost function. The DLS cost function biases learning toward the most recent observations in a time series without ignoring long-term effects.
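The excerpt does not give the exact discount profile, but the DLS idea can be sketched as a weighted sum-of-squares cost over a training window of N observations, with weights that grow toward the most recent sample. One common choice (an assumption for illustration, not necessarily the profile used in the paper) is a geometric discount with forgetting factor 0 < λ ≤ 1:

    E_DLS = \sum_{t=1}^{N} \lambda^{N-t} \left( y_t - \hat{y}_t \right)^2

With λ = 1 this reduces to the ordinary least-squares cost of standard back-propagation, while λ < 1 down-weights older observations without discarding them, matching the stated intent of biasing learning toward recent data.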
Experiments on a non-stationary covariance time series and on a city's water consumption time series indicate that DLS improves ICBP performance. When neural networks are trained to predict signals p steps ahead, the quality of the prediction typically decreases for large values of p. One reason is that the inputs carry little information about an output that lies far ahead in the future. Duhoux and Suykens therefore used a new kind of neural network chain in their experiments and concluded that this chain leads to improved prediction of the temperature. In this paper the same kind of chain is adopted to constitute chained DLS-ICBP neural networks.

This paper is organized as follows. In Section II, we explain how DLS-ICBP is formed. In Section III, we describe how chained DLS-ICBP is built. In Section IV, we give results on experimental data.

II. DLS-ICBP Network

A. ICBP network

Fig. 1 shows a three-layer ICBP network with a layer of output nodes, N_h hidden nodes, d input nodes corresponding to a d-dimensional input pattern or vector, and an extra input node whose input is a weighted sum of the squared input components, \sum_{i=1}^{d} \theta_i x_i^2. In particular, when all \theta_i are taken to be equal, ICBP reduces to CBP. At the same time, the ICBP weights connecting the extra node to the hidden layer differ from those of CBP: in ICBP they take a common constant directly, while their CBP counterparts are adaptable parameters. Consequently, the discrepancy in the number of adaptable parameters between the two models is O(N_h)
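Reading the architecture described above (the precise notation is lost in this excerpt, so the symbols below are assumptions), the net input of hidden unit j in ICBP can be sketched as

    net_j = w_{j0} + \sum_{i=1}^{d} w_{ji} x_i + c \sum_{i=1}^{d} \theta_i x_i^2

where c is the common constant connecting the extra node to every hidden unit and the \theta_i are adaptable, whereas CBP uses a per-hidden-unit adaptable weight on the unweighted sum \sum_i x_i^2; this is where the difference in adaptable-parameter counts between the two models arises.

The chain construction for p-step-ahead prediction can be illustrated with the following minimal sketch. It uses scikit-learn's MLPRegressor purely as a stand-in for a DLS-ICBP block, and the helper names build_chain and chain_predict are hypothetical; the point is the chain wiring in the spirit of the Duhoux-Suykens scheme described above, in which network k is trained to predict k steps ahead from the original lag window augmented with the outputs of networks 1..k-1.

    import numpy as np
    from sklearn.neural_network import MLPRegressor  # stand-in for a DLS-ICBP block

    def build_chain(series, window, p, hidden=10):
        # Train a chain of p linked predictors: network k sees the original
        # lag window plus the outputs of networks 1..k-1.
        series = np.asarray(series, dtype=float)
        n = len(series)
        rows = range(window, n - p + 1)
        X = np.array([series[t - window:t] for t in rows])
        chain, X_aug = [], X.copy()
        for k in range(1, p + 1):
            y_k = np.array([series[t + k - 1] for t in rows])  # target k steps ahead
            net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=2000)
            net.fit(X_aug, y_k)
            chain.append(net)
            # Feed this network's in-sample predictions to the next link in the chain.
            X_aug = np.hstack([X_aug, net.predict(X_aug).reshape(-1, 1)])
        return chain

    def chain_predict(chain, last_window):
        # Produce forecasts for horizons 1..p from the most recent lag window.
        x = np.asarray(last_window, dtype=float).reshape(1, -1)
        preds = []
        for net in chain:
            yhat = float(net.predict(x)[0])
            preds.append(yhat)
            x = np.hstack([x, [[yhat]]])
        return preds

For example, build_chain(y, window=12, p=5) followed by chain_predict(chain, y[-12:]) returns five forecasts for horizons 1 through 5. In the paper each link would instead be a DLS-ICBP network trained with a discounted cost of the kind sketched earlier, rather than a plain MLP.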



Journal title:

Volume   Issue

Pages  -

Publication date: 2005